We introduce two first-order graph-based dependency parsers achieving a new state of the art. The first is a consensus parser built from an ensemble of independently trained greedy LSTM transition-based parsers with different random initializations. We cast this approach as minimum Bayes risk decoding (under the Hamming cost) and argue that weaker consensus within the ensemble is a useful signal of difficulty or ambiguity. The second parser is a "distillation" of the ensemble into a single model. We train the distillation parser using a structured hinge loss objective with a novel cost that incorporates ensemble uncertainty estimates for each possible attachment, thereby avoiding the intractable cross-entropy computations required by applying standard distillation objectives to problems with structured outputs. The first-order distillation parser matches or surpasses the state of the art on English, Chinese, and German.
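For concreteness, the consensus step can be sketched as follows. Because the Hamming cost decomposes over individual arcs, MBR decoding reduces to finding the dependency tree that maximizes the total ensemble vote for its arcs. The sketch below is an illustration under that reading, not the authors' released code; the function name `mbr_consensus_parse` and the use of networkx's Chu-Liu/Edmonds implementation are our assumptions.

```python
# A minimal sketch of the consensus parser's decoding step, assuming we
# already have head predictions from M independently trained greedy parsers.
# `mbr_consensus_parse` and the networkx-based MST step are illustrative,
# not the authors' implementation.
from collections import Counter
import networkx as nx

def mbr_consensus_parse(head_predictions):
    """MBR decoding under the Hamming cost.

    head_predictions: M lists, each giving a predicted head (0 = ROOT)
    for dependents 1..n. Since the Hamming cost decomposes over arcs,
    MBR decoding reduces to a maximum spanning arborescence over
    per-arc ensemble votes.
    """
    M = len(head_predictions)
    n = len(head_predictions[0])

    # Count ensemble votes for each candidate arc head -> dependent.
    votes = Counter()
    for heads in head_predictions:
        for dep, head in enumerate(heads, start=1):
            votes[(head, dep)] += 1

    # Edge weight = vote share; a low winning share marks an ambiguous
    # attachment (the "weaker consensus" signal mentioned above).
    G = nx.DiGraph()
    for (head, dep), count in votes.items():
        G.add_edge(head, dep, weight=count / M)

    # ROOT (node 0) has no incoming edges, so it is forced to be the root
    # of the maximum spanning arborescence (Chu-Liu/Edmonds).
    tree = nx.maximum_spanning_arborescence(G)

    consensus = [0] * n
    for head, dep in tree.edges():
        consensus[dep - 1] = head
    return consensus
```

For instance, given three parsers' predictions [2, 0, 2], [2, 0, 2], and [3, 0, 2] for a three-word sentence, the unanimous arcs 0→2 and 2→3 are kept and the contested dependent 1 receives its majority head, yielding [2, 0, 2].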
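The distillation objective can be sketched in the same spirit. Here we assume a per-arc cost equal to the fraction of ensemble members that disagree with an attachment; the paper's exact cost definition may differ, and the greedy per-dependent cost-augmented decode below stands in for the first-order MST decoding a full implementation would use. All names are hypothetical.

```python
# A hedged sketch of the distillation objective: a structured hinge loss
# whose cost term uses per-arc ensemble uncertainty instead of the usual
# 0/1 mismatch cost. Assumption: cost(h, d) = 1 - (ensemble vote share
# for arc h -> d); the paper's exact cost may differ.
import numpy as np

def distillation_hinge_loss(arc_scores, gold_heads, vote_share):
    """arc_scores, vote_share: (n+1, n+1) arrays indexed [head, dependent];
    gold_heads: length-n array with gold_heads[d-1] = gold head of word d."""
    n = len(gold_heads)

    # Arcs the ensemble endorses incur little cost, so the model is not
    # pushed away from them; arcs the ensemble rejects must be beaten by
    # a larger margin. This sidesteps any cross-entropy over all trees.
    cost = 1.0 - vote_share
    augmented = arc_scores + cost
    np.fill_diagonal(augmented, -np.inf)   # forbid self-attachment

    gold_score = sum(arc_scores[gold_heads[d - 1], d] for d in range(1, n + 1))
    # Cost-augmented decoding: most-violating head per dependent (a full
    # implementation would decode a tree with an MST algorithm instead).
    violator_score = sum(augmented[:, d].max() for d in range(1, n + 1))
    return max(0.0, violator_score - gold_score)
```

The design point this illustrates is that the cost, like the Hamming cost it replaces, factors over arcs, so cost-augmented decoding stays exactly as tractable as ordinary first-order parsing.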